Recent years have witnessed rapid development in NeRF-based image rendering owing to its high quality, whereas point cloud rendering remains comparatively underexplored. Compared with NeRF-based rendering, which suffers from dense spatial sampling, point cloud rendering is naturally less computation-intensive, which enables deployment on mobile computing devices. In this work, we focus on boosting the image quality of point cloud rendering with a compact model design. We first analyze the adaptation of the volume rendering formulation to point clouds. Based on this analysis, we simplify the NeRF representation to a spatial mapping function that requires only a single evaluation per pixel. Further, motivated by ray marching, we rectify the noisy raw point clouds to the estimated ray-surface intersections, which serve as the queried coordinates; this avoids \textit{spatial frequency collapse} and disturbance from neighboring points. Composed of rasterization, spatial mapping, and refinement stages, our method achieves state-of-the-art performance on point cloud rendering, outperforming prior works by notable margins with a smaller model size. We obtain a PSNR of 31.74 on NeRF-Synthetic, 25.88 on ScanNet, and 30.81 on DTU. Code and data are publicly available at https://github.com/seanywang0408/RadianceMapping.
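To make the single-evaluation-per-pixel idea concrete, below is a minimal PyTorch sketch of a spatial-mapping renderer: points are rasterized to per-pixel 3D coordinates, an MLP is queried once per pixel, and a small CNN refines the result. The module names, feature sizes, and rasterization interface are illustrative assumptions of mine, not the released RadianceMapping code.

```python
# Illustrative sketch (not the authors' code) of a single-evaluation-per-pixel
# "spatial mapping" renderer with an image-space refinement stage.
import torch
import torch.nn as nn

class SpatialMapping(nn.Module):
    def __init__(self, feat_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(                # maps a 3D coordinate + view direction ...
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),         # ... to a per-pixel radiance feature
        )
        self.refine = nn.Sequential(             # refinement stage in image space
            nn.Conv2d(feat_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, pixel_xyz, view_dirs):
        # pixel_xyz, view_dirs: (H, W, 3) rasterized surface points and ray directions
        feat = self.mlp(torch.cat([pixel_xyz, view_dirs], dim=-1))   # one MLP call per pixel
        feat = feat.permute(2, 0, 1).unsqueeze(0)                    # (1, C, H, W)
        return self.refine(feat).squeeze(0)                          # (3, H, W) RGB

# Usage with dummy rasterization output
model = SpatialMapping()
xyz = torch.randn(64, 64, 3)       # per-pixel 3D coordinates from rasterization
dirs = torch.randn(64, 64, 3)
rgb = model(xyz, dirs)             # rendered image, shape (3, 64, 64)
```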
A recent study has shown a phenomenon called neural collapse, in which the within-class feature means and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination of the minor classes. To preserve these advantages, we introduce a regularizer on feature centers that encourages the network to learn features closer to this appealing structure in imbalanced semantic segmentation. Experimental results show that our method brings significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
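As a concrete illustration of a feature-center regularizer of this kind, the sketch below builds a simplex-ETF target and penalizes the cosine misalignment of batch-wise class centers. The construction and loss form are my assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch (not the paper's code): pull per-class feature centers toward
# a fixed simplex equiangular tight frame (ETF) target.
import torch
import torch.nn.functional as F

def simplex_etf(num_classes: int, feat_dim: int) -> torch.Tensor:
    """Return a (feat_dim, num_classes) simplex ETF with unit-norm columns."""
    assert feat_dim >= num_classes
    P = torch.linalg.qr(torch.randn(feat_dim, num_classes)).Q      # orthonormal columns
    M = P @ (torch.eye(num_classes) - torch.ones(num_classes, num_classes) / num_classes)
    return F.normalize(M * (num_classes / (num_classes - 1)) ** 0.5, dim=0)

def center_regularizer(features, labels, etf, num_classes):
    """features: (N, D) pixel/point features; labels: (N,) class ids.
    Encourages each batch-wise class center to align with its ETF direction."""
    loss, present = 0.0, 0
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            center = F.normalize(features[mask].mean(0), dim=0)
            loss = loss + (1.0 - torch.dot(center, etf[:, c]))      # cosine alignment
            present += 1
    return loss / max(present, 1)

# Usage with dummy data: add the regularizer to the segmentation loss
etf = simplex_etf(num_classes=20, feat_dim=64)
feats, labels = torch.randn(4096, 64), torch.randint(0, 20, (4096,))
reg = center_regularizer(feats, labels, etf, num_classes=20)
```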
Automatic image colorization is a particularly challenging problem. Due to the ill-posedness of the problem and its multi-modal uncertainty, directly training a deep neural network usually leads to incorrect semantic colors and low color richness. Existing transformer-based methods can deliver better results but depend heavily on hand-crafted, dataset-level empirical distribution priors. In this work, we propose DDColor, a new end-to-end method with dual decoders for image colorization. More specifically, we design a multi-scale image decoder and a transformer-based color decoder. The former restores the spatial resolution of the image, while the latter establishes the correlation between semantic representations and color queries via cross-attention. The two decoders cooperate to learn semantic-aware color embeddings by leveraging multi-scale visual features. With the help of these two decoders, our method produces semantically consistent and visually plausible colorization results without any additional priors. In addition, a simple yet effective colorfulness loss is introduced to further improve the color richness of the generated results. Our extensive experiments demonstrate that the proposed DDColor achieves significantly superior performance to existing state-of-the-art works, both quantitatively and qualitatively. Code will be made publicly available.
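The sketch below illustrates the general shape of such a color decoder: learnable color queries attend to flattened image features via cross-attention to produce color embeddings. Dimensions and layer choices are assumed for illustration and are not the released DDColor architecture.

```python
# Rough sketch (assumed design, not DDColor's code) of a transformer-style color
# decoder: learnable color queries attend to image features via cross-attention.
import torch
import torch.nn as nn

class ColorDecoder(nn.Module):
    def __init__(self, num_queries=100, dim=256, heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))  # learnable color queries
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, img_feats):
        # img_feats: (B, HW, dim) flattened multi-scale visual features
        q = self.queries.unsqueeze(0).expand(img_feats.size(0), -1, -1)
        attended, _ = self.cross_attn(q, img_feats, img_feats)      # queries attend to image
        return self.ffn(attended) + attended                        # (B, num_queries, dim)

# Usage: color embeddings would then be fused with the image decoder's per-pixel features
decoder = ColorDecoder()
feats = torch.randn(2, 32 * 32, 256)
color_emb = decoder(feats)   # one embedding per color query
```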
Real-time individual endpoint prediction has always been a challenging task of great clinical utility for both patients and healthcare providers. With 6,879 chronic kidney disease stage 4 (CKD4) patients as a use case, we explored the feasibility and performance of a gated recurrent unit with decay that models a Weibull probability density function (GRU-D-Weibull) as a semi-parametric longitudinal model for real-time individual endpoint prediction. GRU-D-Weibull achieves a maximum C-index of 0.77 at 4.3 years of follow-up, compared to 0.68 for competing models. The L1-loss of GRU-D-Weibull is ~66% of that of XGB(AFT), ~60% of MTLR, and ~30% of the AFT model at the CKD4 index date. The average absolute L1-loss of GRU-D-Weibull is around one year, with a minimum Parkes serious error rate of 40% after the index date. GRU-D-Weibull is not well calibrated and significantly underestimates the true survival probability. Feature importance tests indicate that blood pressure becomes increasingly important during follow-up, while eGFR and blood albumin become less important. Most continuous features have non-linear/parabolic impacts on the predicted survival time, and the results are generally consistent with existing knowledge. As a semi-parametric temporal model, GRU-D-Weibull offers built-in parameterization of missingness, native support for asynchronously arriving measurements, the ability to output both probability and point estimates at arbitrary time points for arbitrary prediction horizons, and improved discrimination and point-estimate accuracy after incorporating newly arrived data. Further research on its performance with more comprehensive input features and on in-process or post-process calibration is warranted to benefit CKD4 and similarly terminally ill patients.
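A minimal sketch of the modeling idea follows: a recurrent encoder outputs Weibull scale and shape parameters per visit, from which survival probabilities and point estimates can be read off at any horizon. It uses a plain GRU rather than GRU-D's decay mechanism for missingness, and all names and sizes are illustrative assumptions, not the study's implementation.

```python
# Illustrative sketch (assumed design): a recurrent survival head that outputs
# Weibull scale/shape parameters per time step.
import torch
import torch.nn as nn

class GRUWeibullHead(nn.Module):
    def __init__(self, input_dim, hidden=64):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)                   # -> (log scale, log shape)

    def forward(self, x):
        # x: (B, T, input_dim) longitudinal measurements
        h, _ = self.gru(x)
        lam, k = torch.exp(self.out(h)).unbind(-1)        # positive Weibull parameters per step
        return lam, k

def survival_prob(lam, k, t):
    """S(t) = exp(-(t / lambda) ** k) under the Weibull model."""
    return torch.exp(-(t / lam) ** k)

def median_survival_time(lam, k):
    """Point estimate: the Weibull median, lambda * ln(2) ** (1 / k)."""
    return lam * torch.log(torch.tensor(2.0)) ** (1.0 / k)

# Usage on dummy longitudinal data
model = GRUWeibullHead(input_dim=12)
lam, k = model(torch.randn(4, 10, 12))                    # parameters at every visit
print(survival_prob(lam[:, -1], k[:, -1], t=2.0))         # 2-year survival at the latest visit
print(median_survival_time(lam[:, -1], k[:, -1]))         # point estimate of survival time
```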
Aspect- or query-based summarization has recently attracted more attention, as it can generate differentiated summaries based on users' interests. However, current datasets for aspect- or query-based summarization either focus on specific domains, contain relatively small-scale instances, or include only a few aspect types. Such limitations hinder further exploration in this direction. In this work, we take advantage of crowd-sourced knowledge on Wikipedia.org and automatically create a high-quality, large-scale open-domain aspect-based summarization dataset named OASum, which contains more than 3.7 million instances with around 1 million different aspects on 2 million Wikipedia pages. We provide benchmark results on OASum and demonstrate its ability to support diverse aspect-based summary generation. To overcome the data scarcity problem in specific domains, we also perform zero-shot, few-shot, and fine-tuning experiments on seven downstream datasets. Specifically, the zero/few-shot and fine-tuning results show that a model pre-trained on our corpus demonstrates strong aspect- or query-focused generation ability compared with the backbone model. Our dataset and pre-trained checkpoints are publicly available.
Segmenting the fine structure of the mouse brain on magnetic resonance (MR) images is critical for delineating morphological regions, analyzing brain function, and understanding their relationships. Compared to a single MRI modality, multimodal MRI data provide complementary tissue features that can be exploited by deep learning models, resulting in better segmentation results. However, multimodal mouse brain MRI data are often lacking, making automatic segmentation of mouse brain fine structure a very challenging task. To address this issue, it is necessary to fuse multimodal MRI data to produce distinct contrasts across different brain structures. Hence, we propose a novel disentangled and contrastive GAN-based framework, named MouseGAN++, to synthesize multiple MR modalities from single ones in a structure-preserving manner, thus improving segmentation performance by imputing missing modalities and fusing multiple modalities. Our results demonstrate that the translation performance of our method outperforms the state-of-the-art methods. Using the subsequently learned modality-invariant information as well as the modality-translated images, MouseGAN++ can segment fine brain structures with average Dice coefficients of 90.0% (T2w) and 87.9% (T1w), respectively, achieving around +10% performance improvement compared to state-of-the-art algorithms. Our results demonstrate that MouseGAN++, as a simultaneous image synthesis and segmentation method, can fuse cross-modality information in an unpaired manner and yield more robust performance in the absence of multimodal data. We release our method as a mouse brain structural segmentation tool for free academic use at https://github.com/yu02019.
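To illustrate disentanglement-based modality imputation, here is a condensed sketch in which a modality-invariant content encoder is combined with a learned modality code to decode a translated image. The architecture is my simplified assumption, not MouseGAN++ itself; the adversarial, contrastive, and segmentation components are omitted.

```python
# Condensed sketch (assumed design): disentangle a modality-invariant content code,
# then decode it with another modality's code to impute the missing modality.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.InstanceNorm2d(cout), nn.ReLU())

class ModalityTranslator(nn.Module):
    def __init__(self, num_modalities=2, content_ch=64):
        super().__init__()
        self.content_enc = nn.Sequential(conv_block(1, 32), conv_block(32, content_ch))
        self.modality_emb = nn.Embedding(num_modalities, content_ch)   # learned modality code
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(content_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, img, target_modality):
        content = self.content_enc(img)                                # modality-invariant features
        code = self.modality_emb(target_modality)[:, :, None, None]    # (B, C, 1, 1)
        return self.decoder(content + code)                            # translated image

# Usage: translate a T1w slice (modality 0) into a synthetic T2w slice (modality 1)
net = ModalityTranslator()
t1 = torch.randn(1, 1, 128, 128)
fake_t2 = net(t1, torch.tensor([1]))
```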
Small lesions in magnetic resonance imaging (MRI) images are critical for the clinical diagnosis of many diseases. However, MRI quality is easily degraded by various kinds of noise, which can greatly affect the diagnostic accuracy for small lesions. Although several methods have been proposed for denoising MR images, task-specific denoising methods that improve diagnostic confidence for small lesions are still lacking. In this work, we propose a voxel-wise hybrid residual MLP-CNN model to denoise three-dimensional (3D) MR images containing small lesions. We combine the basic deep learning architectures MLP and CNN to obtain an appropriate inductive bias for image denoising, add residual connections to exploit long-range information, and integrate each output layer of the MLP and CNN branches. We evaluated the proposed method on 720 T2-FLAIR brain images with small lesions at different noise levels. The results show that our method outperforms state-of-the-art methods on the test dataset in both quantitative and visual evaluations. Moreover, two experienced radiologists agreed that, at moderate and high noise levels, our method is superior to other methods in recovering small lesions and in overall image quality. The implementation of our method is available at https://github.com/laowangbobo/Residual_MLP_CNN_MIXER.
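As an illustration of the hybrid residual MLP-CNN idea, the sketch below interleaves a 3D convolution branch with a voxel-wise channel MLP, each with residual connections, and predicts a noise residual. Block sizes and the exact mixing scheme are assumptions rather than the repository implementation.

```python
# Schematic sketch (my reading of the described design, not the repository code):
# a hybrid block mixing a CNN branch and a channel-MLP branch with residuals.
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv3d(channels, channels, 3, padding=1), nn.GELU())
        self.mlp = nn.Sequential(nn.Linear(channels, channels * 2), nn.GELU(),
                                 nn.Linear(channels * 2, channels))

    def forward(self, x):                       # x: (B, C, D, H, W)
        x = x + self.cnn(x)                     # local features with a residual connection
        y = x.permute(0, 2, 3, 4, 1)            # channels last for the voxel-wise MLP
        y = y + self.mlp(y)                     # channel mixing with a residual connection
        return y.permute(0, 4, 1, 2, 3)

class Denoiser(nn.Module):
    def __init__(self, channels=32, depth=4):
        super().__init__()
        self.stem = nn.Conv3d(1, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[HybridBlock(channels) for _ in range(depth)])
        self.head = nn.Conv3d(channels, 1, 3, padding=1)

    def forward(self, x):
        return x - self.head(self.blocks(self.stem(x)))   # predict and subtract the noise

# Usage on a dummy 3D volume
noisy = torch.randn(1, 1, 32, 64, 64)
clean = Denoiser()(noisy)
```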
Non-parallel multi-domain voice conversion methods such as StarGAN-VC have been widely applied in many scenarios. However, training these models is often challenging due to their complex adversarial network architectures. To address this issue, in this work we leverage state-of-the-art contrastive learning techniques and incorporate an efficient Siamese network structure into the StarGAN discriminator. Our method, called SimSiam-StarGAN-VC, improves training stability and effectively prevents the discriminator from overfitting during training. We conduct experiments on the Voice Conversion Challenge (VCC 2018) dataset and carry out a user study to validate the performance of our framework. Our experimental results show that SimSiam-StarGAN-VC significantly outperforms existing StarGAN-VC methods in terms of both objective and subjective metrics.
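The sketch below shows the SimSiam-style auxiliary objective that such a discriminator branch could use: a projector, a predictor, and a symmetric negative-cosine loss with stop-gradient. Feature dimensions and how the two views are produced are assumptions, not the paper's code.

```python
# Minimal sketch (assumed formulation) of a SimSiam-style auxiliary loss on
# discriminator features from two augmented views of the same utterance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimSiamHead(nn.Module):
    def __init__(self, feat_dim=256, proj_dim=128):
        super().__init__()
        self.projector = nn.Sequential(nn.Linear(feat_dim, proj_dim), nn.ReLU(),
                                       nn.Linear(proj_dim, proj_dim))
        self.predictor = nn.Sequential(nn.Linear(proj_dim, proj_dim), nn.ReLU(),
                                       nn.Linear(proj_dim, proj_dim))

    def forward(self, feat1, feat2):
        z1, z2 = self.projector(feat1), self.projector(feat2)
        p1, p2 = self.predictor(z1), self.predictor(z2)
        # Symmetric negative cosine similarity with stop-gradient on the targets.
        return -(F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
                 + F.cosine_similarity(p2, z1.detach(), dim=-1).mean()) / 2

# Usage: feat1/feat2 are discriminator features of two augmentations of a mel-spectrogram
head = SimSiamHead()
f1, f2 = torch.randn(8, 256), torch.randn(8, 256)
aux_loss = head(f1, f2)   # added to the adversarial loss to regularize the discriminator
```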
Real-time awareness of dynamic environments is essential for autonomous robots operating in crowded spaces. Although popular voxel-based mapping methods can efficiently represent 3D obstacles of arbitrarily complex shapes, they can hardly distinguish static from dynamic obstacles, leading to limited obstacle-avoidance performance. Although many learning-based dynamic obstacle detection algorithms exist in autonomous driving, the limited onboard computational resources of quadrotors cannot achieve real-time performance with these methods. To address these issues, we propose a real-time dynamic obstacle tracking and mapping system for quadrotor obstacle avoidance using an RGB-D camera. The proposed system first utilizes the depth image together with an occupancy voxel map to generate potential dynamic obstacle regions as proposals. Given these obstacle region proposals, a Kalman filter and our continuity filter are applied to track each dynamic obstacle. Finally, an environment-aware trajectory prediction method based on a Markov chain is proposed using the states of the tracked dynamic obstacles. We implemented the proposed system on a customized quadrotor with a navigation planner. Simulation and physical experiments show that our method can successfully track and represent obstacles in dynamic environments and safely avoid them.
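For the tracking step, a constant-velocity Kalman filter over 3D obstacle centroids is a standard choice; the toy sketch below shows only that component (the noise parameters and detection interface are made up, and the continuity filter and Markov-chain prediction are omitted).

```python
# Toy sketch (not the authors' system): a constant-velocity Kalman filter that
# tracks one dynamic obstacle's 3D position and velocity from noisy detections.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt=0.1, meas_noise=0.05, proc_noise=0.01):
        self.x = np.zeros(6)                              # state: [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                   # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))]) # only position is measured
        self.Q = proc_noise * np.eye(6)
        self.R = meas_noise * np.eye(3)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]

    def update(self, z):
        y = z - self.H @ self.x                           # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

# Usage: feed centroids of detected dynamic-obstacle regions each frame
kf = ConstantVelocityKF()
for t in range(5):
    kf.predict()
    kf.update(np.array([0.1 * t, 0.0, 1.0]))              # noisy detection
print(kf.x[:3], kf.x[3:])                                  # estimated position and velocity
```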
Marking-level high-definition maps (HD maps) are of great significance for autonomous vehicles, especially in large-scale, appearance-changing scenarios where autonomous vehicles rely on markings for localization and on lanes for safe driving. In this paper, we propose a highly feasible framework for automatically building marking-level HD maps using a simple sensor setup (one or more monocular cameras). We optimize the positions of the marking corners to fit the marking segmentation results and, simultaneously, optimize the inverse perspective mapping (IPM) matrix of the corresponding camera to obtain an accurate transformation from the front-view image to the bird's-eye view (BEV). In quantitative evaluation, the constructed HD map almost reaches centimeter-level accuracy, and the accuracy of the optimized IPM matrix is similar to that of manual calibration. The method can also be generalized to build HD maps in a broader sense by increasing the types of recognizable markings.
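The sketch below shows only the IPM warp itself: a homography estimated from four ground-plane correspondences maps the front view, and any detected marking corners, into the bird's-eye view. The point values are made up, and the paper's joint optimization of marking corners and the IPM matrix is not reproduced here.

```python
# Illustrative sketch (not the paper's pipeline): applying an inverse perspective
# mapping (IPM) homography from front view to bird's-eye view with OpenCV.
import cv2
import numpy as np

# Four points on the road plane in the front-view image (pixel coordinates) ...
src = np.float32([[420, 560], [860, 560], [1180, 720], [100, 720]])
# ... and where they should land in the bird's-eye-view (BEV) image.
dst = np.float32([[300, 0], [500, 0], [500, 800], [300, 800]])

ipm = cv2.getPerspectiveTransform(src, dst)        # 3x3 IPM matrix (a homography)

front_view = np.zeros((720, 1280, 3), np.uint8)    # stand-in for a camera frame
bev = cv2.warpPerspective(front_view, ipm, (800, 800))

# Marking corners detected in the front view map to BEV/map coordinates the same way.
corners = np.float32([[[640, 600]], [[700, 650]]])            # (N, 1, 2), as OpenCV expects
bev_corners = cv2.perspectiveTransform(corners, ipm)
print(bev_corners.reshape(-1, 2))
```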